HubIA's DGX

The general purpose of HubIA is to offer CentraleSupélec students a platform for AI computations on GPUs.

This platform is open to all CentraleSupélec students after account creation and request validation.

The platform is built on an NVIDIA DGX Station A100. Its NVIDIA A100 GPUs are partitioned with Multi-Instance GPU (MIG), so several users can run jobs concurrently, each with a controlled resource allocation.

Workloads are managed by Slurm, and the platform is accessed over SSH.

How it works

  • You connect to the DGX with SSH.
  • You submit compute jobs with Slurm (srun for interactive sessions, sbatch for batch jobs).
  • Slurm allocates the requested GPU resources and starts your job when resources are available.
  • You monitor and manage jobs from the command line.
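The steps above can be sketched as the following command sequence. The hostname is a placeholder (use the address provided after account validation), and the exact GPU request flags depend on how the administrators configured Slurm; only the partition names listed under GPU resources below are taken from this page.

```shell
# 1. Connect to the DGX over SSH (hostname is a placeholder).
ssh your_login@dgx.example.centralesupelec.fr

# 2a. Interactive session: request one GPU slice and open a shell.
#     Partition name from the GPU resources summary; GRES syntax is site-specific.
srun --partition=interactive10 --gres=gpu:1 --pty bash

# 2b. Batch job: submit a script and let Slurm schedule it.
sbatch my_job.sh

# 3. Monitor your jobs, and cancel one if needed.
squeue -u $USER
scancel <jobid>
```

The interactive `srun --pty bash` form is convenient for debugging; for long-running training, `sbatch` is preferred because the job survives your SSH session ending.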

If you need support with the DGX, contact us at dgx_support@listes.centralesupelec.fr.

For account requests, students must send the completed form to dgx_support@listes.centralesupelec.fr, with their project supervisor in CC. See Account creation instructions and Connection and file transfer.

Platform information

As of February 21, 2026:

  • DGX OS: NVIDIA DGX Station A100 7.4.0 (build date: 2026-01-26-17-16-49, commit: 3affc2a)
  • Base OS: Ubuntu 24.04.4 LTS
  • Kernel: 6.8.0-100-generic
  • NVIDIA driver: 580.126.16

GPU resources (summary)

You can run jobs on GPU slices of different memory (VRAM) sizes:

  • 10 GB VRAM (standard GPU slice): 7 × 1g.10gb MIG instances (partitions interactive10 and prod10)
  • 40 GB VRAM (large GPU slice): 2 × 3g.40gb MIG instances (partition prod40)
  • 80 GB VRAM (full GPU): 2 × A100 80GB (partition prod80)

See GPU and MIG layout for details.
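As an illustration of targeting one of the slice sizes above, a minimal batch script might look like the following. The partition name comes from the summary above; the job name, time limit, and GPU request line are placeholders to adapt (in particular, the exact GRES naming for MIG slices depends on the site's Slurm configuration).

```shell
#!/bin/bash
#SBATCH --job-name=train-demo    # placeholder job name
#SBATCH --partition=prod10       # 10 GB MIG slice (see summary above)
#SBATCH --gres=gpu:1             # one GPU slice; exact GRES naming is site-specific
#SBATCH --time=01:00:00          # placeholder time limit
#SBATCH --output=%x-%j.out       # log file: <job name>-<job id>.out

# Show which GPU (or MIG instance) Slurm allocated to the job.
nvidia-smi -L

# Replace with your actual workload, e.g.:
# python train.py
```

Submit it with `sbatch my_job.sh`; Slurm queues the job until a matching slice in the requested partition is free.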